Since OpenAI launched ChatGPT in late 2022, artificial intelligence (AI) has seemed to dominate nearly every business discussion. And while AI has the potential to revolutionize how organizations operate across many dimensions, many leaders remain unsure of where to start. They don’t know how to turn the promise of AI into tangible progress: concrete, practical steps they can take to apply AI in a way that helps them achieve their organizational goals more quickly and economically.

We sat down to discuss how the National Library of Medicine (NLM) has found a way to start bridging this gap through small-scale AI experimentation.

NLM is empowering individuals across the organization to pilot and test AI solutions within their work, helping them uncover opportunities to accelerate their personal activities and then share those findings back out with the rest of the organization.

This has allowed NLM to start their AI deployments with more targeted, specific use cases in mind and create communities of practice that foster knowledge sharing and collaboration, all of which leads to more effective AI investments and implementations. 

To discuss how NLM is using experimentation to explore these AI opportunities, particularly in the customer experience space, we’re joined by Adam Korengold. 

An edited transcript follows.

Isabelle Zdatny: Adam, welcome. Delighted to have you with us.

Adam Korengold: Thanks. It’s good to be here. 

Isabelle Zdatny: Before we dive into this discussion I’d love to have you give us a little bit of background about what the National Library of Medicine is and the role you play there. Set the scene for us. 

Adam Korengold: Sure. And, just to get this out of the way, I’m speaking about my own experience. The opinions that I express are solely mine, and they don’t represent those of the National Library of Medicine, the National Institutes of Health, or my contractor, SEI group. 

But that said, it is a really exciting time to be in this space where AI is coming up as a practice and tool that we can use for insight generation.

The National Library of Medicine is the nation’s preeminent medical library. We are one of 27 institutes at the National Institutes of Health in Bethesda, Maryland, and the National Library of Medicine and its antecedents go back all the way to 1836. In the next decade or so, we’re going to start contemplating our 200th anniversary as an institution. 

As you can imagine, we’ve evolved considerably over the years, from the personal collection of the chief surgeon of the US Army to a preeminent repository of biomedical data and information, including a lot of very old and august writings and monographs about medicine and healthcare. It’s one of the most important institutions for healthcare and bioinformatics in the country, if not the world.

I’m an analytics lead in our IT department, the Office of Computer and Communication Systems. And I run a team that manages our customer experience analytics and our customer analytics. We’re thought partners and subject matter experts in both how those tools actually work and how to use them to generate insight.

Isabelle Zdatny: The place you sit, at that intersection between customer experience and IT, is really perfect for AI deployment. You have a leg in both worlds. It’s a great perspective that others in customer experience might not get to see – all the IT infrastructure pieces that need to be in place to make this work. 

Adam Korengold: Very much so. I’m the least IT-competent person in the IT department, but I’m lucky to have a really terrific team around me that is very thoughtful and well versed in database management and design as well as coding. And so we help each other out very well here.

National Library of Medicine’s Approach to AI

Isabelle Zdatny: Can you characterize the National Library of Medicine’s approach to AI as an institution? Is it more cautious? Do you think you’re going full steam ahead? Trying to maintain some centralized control? Taking a decentralized approach?

Adam Korengold: It’s important to bear in mind that we’re not just a library, we are part of the National Institutes of Health. We’re part of the US Department of Health and Human Services. As a government institution, we have to be really careful about how we’re implementing AI and using AI in a number of different ways.

Anytime we do anything in the cloud – and artificial intelligence and large language models are very much a part of this – we have to think about the implications of sending information into the cloud. 

But that is a major problem to solve for any institution that deals with a lot of personal information. We have to make sure that any information that goes into any AI system is walled off from training databases, because we can’t have any of our proprietary information going off into another proprietary database that we don’t control, especially if it involves any private or personal information.

It’s not as much of an issue for us as some other institutes or organizations that are dealing with people’s personal data. And there are very tight controls in the federal government that prevent us from taking advantage of a lot of technology unless we have very specific, clear working safeguards to make sure that information doesn’t get out.

I think a broader challenge for AI is that it’s not free. Even if you’re using a free tool and haven’t bought an application like ChatGPT or Gemini, when you use Gemini, Azure, or Google Cloud you are paying for computing and processing power, and you’re paying for storage. Large language models require a lot of both.

It’s very easy to say, “Well, we have to do something with AI,” and to throw a lot of money at an AI consultant or at any one of the major cloud providers and say, “Okay, we’re going to use AI now, because we’re hearing a lot about it. We don’t want to be left behind in this really rapidly advancing new technology.”

Unless you go about it very intentionally and know what you’re using AI for, it’s very easy to get caught in rapidly accelerating costs. We operate on budgets. We don’t want that to happen. I don’t want my bosses to be surprised by a five-figure bill from a cloud provider. We have to be very careful about how we approach it.

Isabelle Zdatny: That makes so much sense as a government agency. You tend to move slower than a lot of other industries. But I think you have some natural guardrails in place because of the way that you already handle data that are useful as you bring AI in.

Some of the natural conservatism in the government might actually help you hit the ground running faster when applying AI because you’re more used to it as an organization.

Adam Korengold: Yeah. It leads us to be much more intentional about how we’re using AI. It becomes less a conversation of, “What are we doing in AI because we don’t want to be left behind,” and more of, “What are the specific things that we would use AI for that will have measurable impact on how quickly we can do things, or how efficiently we can do things, or how fully we can do things.” 

And that reframes the conversation in a very healthy way.

NLM Identifies Opportunities and Limitations of AI through Experimentation

Isabelle Zdatny: So let’s get into it. I opened this by talking about how NLM uses experimentation to generate and test these very specific targeted use cases. Could you talk about that a little bit?

Adam Korengold: I participated in a pilot program that one of the divisions here at NLM ran, and the idea was to develop 10 or 12 specific use cases. And when I say specific use cases, I mean ways of using AI that were very narrowly focused. 

For example, a team wants to use AI to help code things in a quicker and more efficient way, or wants to develop a chat bot that would help customers discover our materials more effectively.

In our use case, my team worked to see if we could analyze our customer experience survey responses from the Qualtrics survey we run. We had about 300,000 responses over the last three or four years. And of those, we have about 60,000 verbatim responses.

If we were to go through every one of those responses and manually code them into buckets, it would be really hard to get segments that were mutually exclusive and collectively exhaustive, or even to review a database that big. Even if my team and I devoted weeks and weeks to that effort, it would be really hard to do it consistently.

But at their base, large language models are pattern-recognition tools. Can we train a large language model to take these responses and code them into 10 mutually exclusive, collectively exhaustive categories?

We found out that we can. It’s not perfect yet. We had to start by biting off smaller tranches of that 60,000. If we can get to 2,000, 3,000, 4,000, or 6,000 responses, that’s enough for us to say that the sample is roughly representative of our 60,000 responses. And then we can draw some conclusions from that.
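The workflow Adam describes, drawing a tranche from a large pool of verbatims, assigning each response to one of a fixed set of categories, and tallying the distribution, can be sketched roughly as follows. This is a hypothetical illustration, not NLM’s actual pipeline: the keyword matcher here is a toy stand-in for the large language model call, and the category names are invented.

```python
import random

# Invented example labels; a real project would define its own set of
# roughly 10 mutually exclusive, collectively exhaustive categories.
CATEGORIES = ["search", "content", "navigation", "other"]

def classify(verbatim: str) -> str:
    """Toy stand-in for an LLM call that assigns one category per response.
    In practice this would prompt a model with the category definitions
    and the verbatim, then parse the single label it returns."""
    text = verbatim.lower()
    if "find" in text or "search" in text:
        return "search"
    if "article" in text or "information" in text:
        return "content"
    if "menu" in text or "page" in text:
        return "navigation"
    return "other"

def code_tranche(verbatims, sample_size, seed=0):
    """Draw a random tranche (e.g. 2,000 of 60,000) and tally categories."""
    rng = random.Random(seed)
    sample = rng.sample(verbatims, min(sample_size, len(verbatims)))
    counts = {category: 0 for category in CATEGORIES}
    for verbatim in sample:
        counts[classify(verbatim)] += 1
    return counts

comments = [
    "I could not find the article I needed",
    "The search results were helpful",
    "Great information on this page",
]
print(code_tranche(comments, sample_size=3))
```

Because the tranche is a random sample, the tallied distribution can be treated as roughly representative of the full pool once the sample is large enough, which is the reasoning behind starting with a few thousand responses rather than all 60,000.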

Isabelle Zdatny: In theory, the value of AI is also that it’s continuously learning. So it’s not cutting you, Adam, out of the process. It can take a shot at categorizing them, you can provide it with feedback, and you can increasingly improve the outputs through collaboration with it.

So take advantage of what trained CX professionals are good at and what AI is good at. 

Adam Korengold: Right. And I think a really important thing to bear in mind is that we’re talking about artificial intelligence, not artificial sentience. A lot of the opinions people have formed about AI, let’s face it, come from popular culture.

They come from watching Star Trek and seeing Data the Android or watching the Terminator movies. And you see the potential negatives of having this large general AI system. 

We’re not talking about general AI here. We’re talking about a system that is basically a glorified learning search engine. Instead of just recognizing keywords and associating them with each other based on similarities and differences, it’s learning over time to recognize patterns. If it has seen a sentence, and enough similar sentences, and we then give that large language model a shorter sentence, can it predict what the next word, image, or item in the sequence logically is?
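The pattern-recognition idea Adam sketches, seeing enough similar sentences to guess the next word in a shorter one, can be illustrated with a deliberately tiny model. Real large language models learn vastly richer patterns, but a word-bigram counter shows the basic predict-the-next-item mechanic. The training sentences here are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count how often each word follows each other word."""
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, prefix):
    """Guess the most frequent continuation of the prefix's last word."""
    last = prefix.lower().split()[-1]
    if last not in follows:
        return None
    return follows[last].most_common(1)[0][0]

corpus = [
    "the library holds medical journals",
    "the library holds rare books",
    "the library holds medical monographs",
]
model = train_bigrams(corpus)
# "medical" follows "holds" more often than "rare" in this toy corpus
print(predict_next(model, "the library holds"))
```

The model simply picks the statistically most common continuation it has seen, which is the same underlying idea, at microscopic scale, as a language model completing a sentence.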

The idea is to make our thinking easier, not to hand it off entirely.

For my use case, it means that my team isn’t going to have to spend months and months going through this database. Instead, we might be able to have a large language model run for a couple of hours on a small subset of data and then say, “Here’s the general tenor of these comments,” or, “Here are the buckets they break into,” more confidently than we could if we had done it ourselves.

But it doesn’t eliminate the need to check. I asked a colleague of mine, who’s much more of a coder than I’ll ever be, “What do you think of the coding use case for AI?” I expected them to say they would never trust it. But what they said was, “Well, it doesn’t eliminate the need for quality assurance and checking, but if this is something that can save me some research time and help me generate a snippet of code that would otherwise be harder for me to figure out, I’m all for it.”

Isabelle Zdatny: I love this approach, because it starts with the end in mind. As you said, everyone’s just like, “AI everything all the time.” And that can get very unwieldy. Start with what’s already on your roadmap, what you’re already trying to achieve, and where there are available AI solutions to help reach those goals faster.

Analyzing verbatims wasn’t the first thing that you were going to do for this pilot, right? 

Adam Korengold: The original approach that we wanted to investigate was around data visualization, which is something that you and I have talked about a lot over the years. Visualizing the data that we have is both the first step, as an exploratory analysis tool, and the last mile. It’s where the rubber meets the road, right?

Because when we’re presenting our data visually, our audiences tend to respond a lot better. A picture is literally worth a thousand words. But the challenge is where AI is right now: it’s not all that good at distilling lots of really complex information into a simple, clear, actionable graphic.

When I have seen AIs do data visualization, it’s been for very complicated scatter plots and very complicated graphics. If I were to put that on a screen in front of my leadership, their eyes are going to glaze over. 

Instead, can we have AI make the process of creating visualizations easier? 

And this was a natural fit, because when people are using their own words and their own language to express how they feel about their interactions with our digital products, it’s nice to be able to translate that into some specific actionable buckets. 

Isabelle Zdatny: It’s augmenting your existing activities. 

Fostering a Culture of Experimentation and Collaboration

Isabelle Zdatny: Can you speak to how NLM has incentivized or encouraged people to experiment with the different tools?

Adam Korengold: It’s an NIH program that offers a few months of access and a credit for computing power, so you can experiment within NIH’s four walls in a Google, Amazon, or similar cloud environment.

We’re very tool agnostic, but different groups within institutes will gravitate towards different cloud providers. And all of them work in similar ways, but they’re also kind of different. It’s useful to see what flavors make more sense. 

We have groups and environments that are more oriented towards Amazon. We started out working in Microsoft’s environment, but I have a colleague working to advance our use case in the Amazon environment.

We asked, “What if we did pivot this to Amazon and how would that work? What would we use? What would it cost?”

So it’s nice to be able to prove that concept in a limited way. 

Isabelle Zdatny: And it’s not one size fits all.

I’m curious, it sounds like lots of different teams and groups are doing lots of different experiments, trying different things, seeing what works. How are you then sharing your findings with the rest of the organizations and moving NLM up that collaboration curve?

Adam Korengold: There are a lot of formal and informal groups popping up across NIH and within different institutes. I can’t speak for what other institutes are doing, but I’m part of a standing community of practice across NIH that holds a couple of virtual meetings a month, where someone is usually presenting on their particular use case or a particular tool they’ve learned.

Within my department at NLM, we have an informal biweekly meeting where we share stuff that we’re working on. It’s Thursday happy hour. And I’m sure that there are many other formal and informal groups that are there too.

The other thing that’s really important is that our CIO never fails to mention that she really encourages people to experiment, try new things, and innovate. It’s really important to hear that from the top down.

I’m sure others in the public sector and especially others in IT can identify with just filling out tickets, right? We do what we’re told to do, and it’s really nice to be part of an IT department where we fill out tickets and solve problems that people have come to us with.

But our leadership has been really supportive, telling us to learn how to do new things, learn how to innovate, because that’s where we’re going to add value. It’s important to get comfortable with trying new things.

And sometimes, as you can see from the use case that we’ve talked about today, you might have a really innovative idea that doesn’t work, and that’s okay. You pivot. You wanted to use AI for data visualization. It’s not really there yet, but what could still help you? And then you find another use case based on what you’ve learned already.

This is especially important in a public sector environment, where a lot of us are moving from a project management orientation, which means you do things because your boss tells you to do them, or because the law tells you that you have to do them. The way you measure success is almost a punch list: you were asked to do these three things, you did them on time and under budget, you did them fully, and then you moved on to the next set of projects.

But increasingly, what we’re being asked to do is to be product owners. Especially so in a library, where we are stewards of the products that we provide to the public and to the world at large. We may not be the actual product owner, but we kind of own the technology and the insight generation part of it.

In that sense, it’s less about doing things because people ask us to do them. It’s our job to monitor, investigate, assess over time, and build processes that let us do that so that we don’t have to wait for our boss to tell us when website X needs to be refreshed. We can look at the feedback that we’re getting and see page views are down, or there doesn’t seem to be a lot of the navigation type behavior that we’d want to see. So that would lead us to realize we need to refresh the design and the architecture of this website.

We let our research and analytics and insights tell us that, so that if anything, we’re the ones who are going to the product owners saying, “Hey, when can we work on this?”

Isabelle Zdatny: That’s right. We can shift from that reactive, break-fix mode into more predictive, prescriptive actions, because AI allows us to ingest and process so much more information. That’s really interesting for us in the CX space. Sometimes we get pigeonholed into just being responsible for deploying surveys.

I think surveys are still going to be a useful source of input as we move ahead. Unlike behavioral and operational data, surveys capture subjective data about people’s feelings and perceptions, so they should remain one input in the broad constellation of data organizations are capturing. That constellation will then allow organizations to generate more predictive, prescriptive insights that help people across the organization make faster, smarter decisions within their roles.

Hopefully AI is going to help us make that switch into a more strategic asset rather than just doing voice of the customer and survey development over here. 

Adam Korengold: And that’s similar to the pattern that a lot of IT departments fall into as well, where historically they’ve been taking orders. Somebody requests a project, and we complete the project. There are certainly parts of any organization that are still like that, but I think it’s an opportunity to become thought partners and leaders in this unique skillset that we do have. And you raise an important point, that the kinds of skills that we’re talking about here are applicable all through the organization, whether you’re doing surveys or not.

You’ll always have people who say they don’t have anything to do with customers, that they’re just developing code or talking with other parts of the organization. But all of those people are dealing with customers; they just haven’t had it framed that way yet. A coder might never speak to an external customer on the phone, but the code they build is how a customer is going to get our information.

Isabelle Zdatny: I would even go further than that and say, as CX professionals, one of our jobs is to improve the experience our organization delivers to customers. Another one is to help our organization achieve its business outcomes by providing it with this constant stream of relevant information.

So even if we’re not supporting backend teams, we can still provide them with information that’s going to help them do their jobs better because we have that view of what’s happening in the world around our institution.

It’s a really interesting time for CX and Experience Management that AI is going to help us accelerate as a discipline.

Adam Korengold: It absolutely is. Yeah.

Key Lessons for XM Professionals

Isabelle Zdatny: So to wrap this up here – what are some of the key lessons that you’ve learned through this process? What advice would you like to share with other Experience Management professionals who are looking to start incorporating AI into their CX activities?

Adam Korengold: I think the most important advice I could give is some advice that I got when I first started this job. When I started in this role, I was really concerned that I was going to be leading a team in an IT department, and I’ve had very little training in coding. I am not an IT person. I am an analytics person; I do surveys, I relate insights. And the best advice that I got was to do what you are passionate about and what you’re good at.

You have people all around you who are implementers. If you’re asking yourself: how do I get smart about AI? And do I need to become a Python coding genius? Do I need to learn everything there is to know about how to code chat bots in Google Cloud or AWS?

And the answer to that is: you probably have people around you in your IT department who know whatever systems you’re working on. You can explore this together at the level of competency and comfort that you have.

You can learn enough to be more than useful about what information goes in, what information goes out. And you don’t have to necessarily worry about how to build all the little different switches and components because you have people around you who can. And then it becomes much more of a team effort.

Isabelle Zdatny: When organizations are building AI models, CX professionals can help decide upfront what data you should be putting into the system.

And then on the backend: is it accurate? How are you integrating this into other activities? The middle piece of building the AI architecture is not under the purview of a CX professional.

One of our big pieces of advice has always been to go find allies in IT. If you don’t have an Adam who naturally bridges both worlds, you can be the Adam for your organization. That technology piece is so essential for scaling and enabling all of your other XM activities and outputs that you really need to bring IT in.

Adam Korengold: And make friends in IT, sometimes. 

Isabelle Zdatny: Make friends in IT! They’re really nice. 

Adam Korengold: Yeah, they are.

IT experts have, I think, an unfair reputation for being inflexible, hard to work with, and not very communicative. But in my experience, nothing could be further from the truth.

I find that my IT colleagues are so creative, because their job is to find solutions for things. If we hear about some vulnerability in an application that’s critical to us, we find a way to make it work. It can’t not work. And sometimes it takes a while to get there, but we do get there.

Isabelle Zdatny: Fantastic. Well, with that, let me say a huge thank you to Adam for joining us. It was wonderful as always to hear about the great work you’re doing.

Adam Korengold: Thanks for having me again.

Check out the links on the page to the resources we mentioned in this discussion, as well as Adam’s LinkedIn profile.